Install a Helm Chart on GKE
Learn how to install a Kanban application using Helm on Google Kubernetes Engine.
Install Kanban on GKE
We’ll use our umbrella chart to install an application with only one command. Yes, that’s right. With only one command we can install an application on a cloud!
Here is a summary of the most important commands that can help us manage a cluster.
Use the following commands to log in and set up gcloud:
- Log in to gcloud using gcloud auth login.
- Create a cluster using gcloud container clusters create-auto kanban-cluster --region=<YOUR-REGION> --project=<YOUR-PROJECT-ID>.
- Update the kubectl config using gcloud container clusters get-credentials kanban-cluster --region=<YOUR-REGION> --project=<YOUR-PROJECT-ID>.
- Delete the cluster using gcloud container clusters delete kanban-cluster --region=<YOUR-REGION> --project=<YOUR-PROJECT-ID>.
And here is the command to install the Kanban application, which might take a couple of minutes:
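The exact command lives in the lesson's widget, but the first line of the release output ("Release "kanban" does not exist. Installing it now.") indicates a helm upgrade --install call. A likely form, assuming the umbrella chart sits in a local ./kanban directory:

```shell
# Install the umbrella chart, or upgrade it if the release already exists.
# The chart path ./kanban is an assumption about the course's repository layout.
helm upgrade kanban ./kanban --install --namespace kanban --create-namespace
```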
The output will be as follows:
Release "kanban" does not exist. Installing it now.
W0107 06:53:14.496052 22612 warnings.go:70] Autopilot set default resource requests for Deployment kanban/kanban-frontend, as resource requests were not specified. See http://g.co/gke/autopilot-defaults.
W0107 06:53:14.497283 22612 warnings.go:70] Autopilot set default resource requests for Deployment kanban/kanban-backend, as resource requests were not specified. See http://g.co/gke/autopilot-defaults.
W0107 06:53:14.609664 22612 warnings.go:70] Autopilot increased resource requests for StatefulSet kanban/postgres to meet requirements. See http://g.co/gke/autopilot-resources.
NAME: kanban
LAST DEPLOYED: Fri Jan 7 06:53:13 2022
NAMESPACE: kanban
STATUS: deployed
REVISION: 1
Thank you for running Kanban application!
1. Establish port-forwarding:
> kubectl port-forward svc/kanban-frontend 8080:8080 --namespace kanban
2. Enter the page http://localhost:8080/
If you encounter the error Error: Kubernetes cluster unreachable: Get "http://localhost:8080/version": dial tcp 127.0.0.1:8080: connect: connection refused, make sure that you execute the following command:
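The command itself is elided here, but this error means that kubectl (and therefore Helm) cannot reach the cluster's API server, so the fix is most likely the credentials refresh from the command summary above:

```shell
# Point kubectl at the GKE cluster again (same command as in the summary above).
gcloud container clusters get-credentials kanban-cluster --region=<YOUR-REGION> --project=<YOUR-PROJECT-ID>
```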
Let’s try it out in the playground below:
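The playground command is not shown here; judging by the output that follows (Pods, Services, Deployments, ReplicaSets, and StatefulSets), it is most likely a kubectl get all call scoped to the kanban namespace:

```shell
# List every workload and Service the chart created in the kanban namespace.
kubectl get all --namespace kanban
```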
The output will be as follows:
NAME READY STATUS RESTARTS AGE
pod/kanban-backend-69fcf4979b-5hbdk 1/1 Running 1 3m52s
pod/kanban-backend-69fcf4979b-zxfpr 1/1 Running 0 3m52s
pod/kanban-frontend-55cf65f8f8-rqcmk 1/1 Running 0 3m52s
pod/postgres-0 1/1 Running 0 3m52s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kanban-backend ClusterIP 10.10.129.122 <none> 8080/TCP 3m52s
service/kanban-frontend ClusterIP 10.10.131.199 <none> 8080/TCP 3m52s
service/postgres ClusterIP 10.10.128.77 <none> 5432/TCP 3m52s
service/postgres-headless ClusterIP None <none> 5432/TCP 3m52s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kanban-backend 2/2 2 2 3m52s
deployment.apps/kanban-frontend 1/1 1 1 3m52s
NAME DESIRED CURRENT READY AGE
replicaset.apps/kanban-backend-69fcf4979b 2 2 2 3m53s
replicaset.apps/kanban-frontend-55cf65f8f8 1 1 1 3m53s
NAME READY AGE
statefulset.apps/postgres 1/1 3m53s
We can drill down to get more details, e.g., for the kanban-backend Deployment, as shown below:
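The drill-down command is elided by the widget; given the output that follows, it is presumably a kubectl describe call (assuming the same kanban namespace):

```shell
# Show the full Deployment configuration and its recent events.
kubectl describe deployment kanban-backend --namespace kanban
```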
The output will be as follows:
Name: kanban-backend
Namespace: kanban
CreationTimestamp: Fri, 07 Jan 2022 13:56:49 +0100
Labels: app=kanban-backend
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=kanban-backend
app.kubernetes.io/version=kanban-1
group=backend
Annotations: autopilot.gke.io/resource-adjustment:
{"input":{"containers":[{"name":"kanban-backend"}]},"output":{"containers":[{"limits":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2G...
deployment.kubernetes.io/revision: 1
meta.helm.sh/release-name: kanban
meta.helm.sh/release-namespace: kanban
Selector: app=kanban-backend
Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=kanban-backend
app.kubernetes.io/name=kanban-backend
app.kubernetes.io/version=kanban-1
group=backend
Containers:
kanban-backend:
Image: wkrzywiec/kanban-app:helm-course
Port: 80/TCP
Host Port: 0/TCP
Limits:
cpu: 500m
ephemeral-storage: 1Gi
memory: 2Gi
Requests:
cpu: 500m
ephemeral-storage: 1Gi
memory: 2Gi
Environment:
DB_SERVER: postgres
POSTGRES_DB: kanban
POSTGRES_PASSWORD: kanban
POSTGRES_USER: kanban
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Progressing True NewReplicaSetAvailable
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: kanban-backend-69fcf4979b (2/2 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 4m49s deployment-controller Scaled up replica set kanban-backend-69fcf4979b to 2
If we want to test the application ourselves, the only thing left to do is to set up port-forwarding, just as before:
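This is the same port-forward command printed in the release notes earlier:

```shell
# Forward local port 8080 to the kanban-frontend Service.
kubectl port-forward svc/kanban-frontend 8080:8080 --namespace kanban
```

Note that kubectl reports the resolved Pod port (80) in its output, even though the Service itself listens on 8080.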
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
However, on the platform, execute the following command:
So locally, if we enter http://localhost:8080 in a web browser, or, on the platform, click on the URL provided in the widget, we should be able to see Kanban’s main page, as shown below:
On Google Cloud
We should be sure by now that the application is working. But before moving to the next lesson, we recommend one more check in the Google Cloud dashboard. To do that, go to https://console.cloud.google.com and, from the left menu, select “Kubernetes Engine” (as shown below).
We should see a list of clusters with only a single entry. On the left, a new menu appears. We can use it to get a more detailed insight into our clusters, similar to the Kubernetes Dashboard. Here is a view of the “Workloads” page which lists all the Deployments and StatefulSets.
We can select one of them, e.g., kanban-backend, and we’ll be directed to the detailed page. We can get more insight about the Deployment, including all the Pods that are part of it, as shown below.
If we select one of them, we are redirected to the Pod’s detail page.
And if we click “Logs,” we can access all the logs produced by our application, as shown below: